991.
Yongjie Yang, Shanshan Tu, Raja Hashim Ali, Hisham Alasmary, Muhammad Waqas, Muhammad Nouman Amjad. Computers, Materials & Continua, 2023, 74(1): 801-815
With recent developments in the Internet of Things (IoT), the amount of data collected has expanded tremendously, increasing the demand for data storage, computational capacity, and real-time processing. Cloud computing has traditionally played an important role in establishing the IoT, but fog computing has recently emerged as a complementary field thanks to its enhanced mobility, location awareness, heterogeneity, scalability, low latency, and geographic distribution. However, IoT networks are vulnerable to attacks because of their open and shared nature, and various fog-computing-based security models that protect IoT networks have therefore been developed. A distributed architecture based on an intrusion detection system (IDS) provides a dynamic, scalable IoT environment that can disperse centralized tasks to local fog nodes while successfully detecting advanced malicious threats. In this study, we examined the time-related aspects of network traffic data and present an intrusion detection model based on a two-layered bidirectional long short-term memory (Bi-LSTM) network with an attention mechanism for traffic classification, verified on the UNSW-NB15 benchmark dataset. The proposed model outperforms numerous leading-edge network IDSs built on machine learning models in terms of accuracy, precision, recall, and F1 score.
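The paper's exact architecture is not reproduced here, but the attention step it describes (weighting each timestep's Bi-LSTM hidden state before classification) can be sketched in plain Python. The dot-product query, the two-dimensional toy states, and the pooling details below are illustrative assumptions, not the authors' implementation:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(hidden_states, query):
    """Score each timestep's hidden state against a query vector (here a
    plain dot product) and return the attention-weighted sum (context)."""
    scores = [sum(h_i * q_i for h_i, q_i in zip(h, query)) for h in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]
    return context, weights

# Toy example: 3 timesteps of 2-dimensional (concatenated fwd/bwd) states.
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx, w = attention_pool(states, query=[1.0, 1.0])
```

The timestep whose hidden state best matches the query receives the largest weight, so the pooled context vector emphasizes the most informative part of the flow before the final classifier.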
992.
Blockchain merges with the Internet of Things (IoT) to address security- and privacy-related issues. However, conventional blockchain suffers from scalability problems due to its linear structure, which increases storage overhead, and intrusion detection that neglects attack severity leads to performance degradation. To overcome these issues, we propose the MZWB (Multi-Zone-Wise Blockchain) model. First, every IoT node in the network proves its legitimacy through authentication with the Enhanced Blowfish Algorithm (EBA), which considers several metrics. The legitimate nodes then construct and manage the network using a Bayesian Directed Acyclic Graph (B-DAG), again based on several metrics. Intrusion detection is performed in two tiers. In the first tier, a Deep Convolutional Neural Network (DCNN) analyzes data packets by extracting packet-flow features and classifies them as normal, malicious, or suspicious. In the second tier, the suspicious packets are reclassified as normal or malicious by a Generative Adversarial Network (GAN). Finally, the intrusion scenario is reconstructed to reduce attack severity: Improved Monkey Optimization (IMO) discovers the attack path based on several metrics, and a graph-cut algorithm performs attack scenario reconstruction (ASR). The MZWB method is simulated on the UNSW-NB15 and BoT-IoT datasets in Network Simulator 3 (NS-3.26) and compared with previous work on energy consumption, storage overhead, accuracy, response time, attack detection rate, precision, recall, and F-measure. The simulation results show that the proposed MZWB method outperforms existing works.
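The two-tier detection flow described above can be illustrated as follows; the simple threshold lambdas stand in for the DCNN (tier 1) and GAN (tier 2), and the packet fields, thresholds, and labels are invented for the sketch:

```python
def two_tier_triage(packets, tier1, tier2):
    """Tier 1 labels each packet 'normal' | 'malicious' | 'suspicious';
    tier 2 re-examines only the suspicious ones as 'normal' | 'malicious'."""
    verdicts = []
    for pkt in packets:
        label = tier1(pkt)
        if label == "suspicious":
            label = tier2(pkt)
        verdicts.append(label)
    return verdicts

# Hypothetical stand-ins for the two classifiers.
tier1 = lambda p: ("malicious" if p["rate"] > 100
                   else "suspicious" if p["rate"] > 50
                   else "normal")
tier2 = lambda p: "malicious" if p["entropy"] > 0.8 else "normal"

pkts = [{"rate": 10,  "entropy": 0.1},
        {"rate": 70,  "entropy": 0.9},
        {"rate": 70,  "entropy": 0.2},
        {"rate": 200, "entropy": 0.5}]
result = two_tier_triage(pkts, tier1, tier2)
# result -> ["normal", "malicious", "normal", "malicious"]
```

Only the ambiguous middle band of traffic is escalated to the heavier second-tier model, which keeps the common case cheap.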
993.
Shiming He, Meng Guo, Bo Yang, Osama Alfarraj, Amr Tolba, Pradip Kumar Sharma, Xi’ai Yan. Computers, Materials & Continua, 2023, 75(3): 5027-5047
Sensors produce large amounts of multivariate time series data recording the states of Internet of Things (IoT) systems. Multivariate time series timestamp anomaly detection (TSAD) can identify the timestamps of attacks and malfunctions. However, determining which sensor or indicator is abnormal is necessary for a more detailed diagnosis, a process referred to as fine-grained anomaly detection (FGAD). Although FGAD can be built on top of TSAD methods, existing works provide no quantitative evaluation, so their performance is unknown. To tackle the FGAD problem, this paper first verifies that TSAD methods achieve low performance when applied directly to the FGAD task because they fuse features excessively and ignore dynamic changes in the relationships between indicators. Accordingly, this paper proposes a multivariate time series fine-grained anomaly detection (MFGAD) framework. To avoid excessive feature fusion, MFGAD constructs two sub-models that independently identify the abnormal timestamps and the abnormal indicators, instead of a single model, and then combines the two kinds of results to detect fine-grained anomalies. Based on this framework, an algorithm combining a Graph Attention Network (GAT) and an Attention Convolutional Long Short-Term Memory (A-ConvLSTM) network is proposed, in which the GAT learns temporal features of multiple indicators to detect abnormal timestamps and the A-ConvLSTM captures the dynamic relationships between indicators to identify abnormal indicators. Extensive simulations on a real-world dataset demonstrate that the proposed algorithm achieves a higher F1 score and hit rate than extensions of existing TSAD methods, thanks to the two independent sub-models for timestamp and indicator detection.
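The framework's combination step (merging the abnormal timestamps flagged by one sub-model with the abnormal indicators flagged by the other) can be sketched as below; the flag dictionaries and indicator names are invented for illustration:

```python
def fine_grained_anomalies(timestamp_flags, indicator_flags):
    """timestamp_flags: {t: bool} from the timestamp sub-model (e.g. the GAT).
    indicator_flags: {t: set of indicator names} from the indicator sub-model
    (e.g. the A-ConvLSTM). A fine-grained anomaly (t, i) is reported only
    when the timestamp sub-model also marks t as abnormal."""
    return sorted((t, i)
                  for t, is_abn in timestamp_flags.items() if is_abn
                  for i in indicator_flags.get(t, ()))

# Hypothetical outputs of the two sub-models over three timestamps.
ts_flags = {0: False, 1: True, 2: True}
ind_flags = {0: {"temp"}, 1: {"temp"}, 2: {"pressure", "flow"}}
fga = fine_grained_anomalies(ts_flags, ind_flags)
```

Note that the indicator hit at timestamp 0 is suppressed because the timestamp sub-model considers that instant normal; requiring agreement between the two independent sub-models is what the framework uses to avoid over-reporting.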
994.
Muhammad Irfan, Ahmad Shaf, Tariq Ali, Umar Farooq, Saifur Rahman, Salim Nasar Faraj Mursal, Mohammed Jalalah, Samar M. Alqhtani, Omar AlShorman. Computers, Materials & Continua, 2023, 76(1): 711-729
A brain tumor is a mass or growth of abnormal cells in the brain. In both children and adults, brain tumors are among the leading causes of death. There are several types of brain tumors, including benign (non-cancerous) and malignant (cancerous) tumors. Diagnosing brain tumors as early as possible is essential, as this can improve the chances of successful treatment and survival. To address this problem, we present a hybrid intelligent deep learning technique that uses several pre-trained models (ResNet50, VGG16, VGG19, U-Net) and their integration for computer-aided detection and localization of brain tumors. These pre-trained and integrated deep learning models have been applied to the publicly available dataset from The Cancer Genome Atlas, which consists of 120 patients. The pre-trained models classify images as tumor or no tumor, while the integrated models segment the tumor region. We evaluated their performance in terms of loss, accuracy, intersection over union, Jaccard distance, Dice coefficient, and Dice coefficient loss. Among the pre-trained models, U-Net achieves the highest performance with 95% accuracy. Among the integrated pre-trained models, U-Net with ResNet-50 outperforms all others, correctly classifying and segmenting the tumor region.
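Two of the segmentation metrics the abstract lists, the Dice coefficient and intersection over union (IoU), can be computed directly on flat binary masks; the example masks below are made up for illustration:

```python
def dice_and_iou(pred, truth):
    """Dice coefficient and intersection-over-union for flat binary masks
    (lists of 0/1 pixel labels of equal length)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred  = [1, 1, 0, 0, 1]   # model's predicted tumor mask
truth = [1, 0, 0, 1, 1]   # ground-truth annotation
dice, iou = dice_and_iou(pred, truth)
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), but Dice weights the overlap more generously, which is why segmentation papers often report both.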
995.
996.
Intrusion detection involves identifying unauthorized network activity and recognizing whether the data constitute an abnormal network transmission. Recent research has focused on semi-supervised learning mechanisms that identify abnormal network traffic while handling both labeled and unlabeled data in industry. However, training and classifying network traffic in real time poses challenges, as it can degrade the overall dataset and make attacks difficult to prevent. Additionally, existing semi-supervised learning research often fails to analyze experimental results comprehensively. This paper proposes XA-GANomaly, a novel technique for explainable adaptive semi-supervised learning that addresses these issues by dynamically training small subsets with GANomaly, an image anomaly detection model. First, this research introduces a deep neural network (DNN)-based GANomaly for semi-supervised learning. Second, it presents an adaptive algorithm for the DNN-based GANomaly, validated on four subsets of the adaptive dataset. Finally, this study demonstrates a monitoring system that incorporates three explainability techniques (Shapley additive explanations, reconstruction error visualization, and t-distributed stochastic neighbor embedding) to respond effectively to attacks on traffic data at each stage: feature engineering, semi-supervised learning, and adaptive learning. Compared to other single-class classification techniques, the proposed DNN-based GANomaly improves F1 score by 13% and 8% and accuracy by 4.17% and 11.51% on the NSL-KDD (Network Security Laboratory-Knowledge Discovery in Databases) and UNSW-NB15 datasets, respectively. Furthermore, experiments with the proposed adaptive learning show results that mostly improve on the initial values. An analysis and monitoring system combining the three explainability methodologies is also described. The proposed method is thus potentially applicable in practical industry settings, and future research will explore handling unbalanced real-time datasets in various scenarios.
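The core GANomaly idea of scoring samples by reconstruction error and thresholding that score can be sketched as follows. The stub "autoencoder", the feature vectors, and the threshold are illustrative assumptions, not the paper's model:

```python
def reconstruction_scores(samples, reconstruct):
    """Anomaly score = mean squared error between a sample and its
    reconstruction; GANomaly-style detectors threshold this score."""
    scores = []
    for x in samples:
        r = reconstruct(x)
        scores.append(sum((a - b) ** 2 for a, b in zip(x, r)) / len(x))
    return scores

def flag_anomalies(scores, threshold):
    """Samples whose score exceeds the threshold are flagged as attacks."""
    return [s > threshold for s in scores]

# Stub "autoencoder" that can only reproduce values near 0.5, standing in
# for a model trained exclusively on normal traffic.
reconstruct = lambda x: [0.5 for _ in x]
normal = [0.5, 0.5, 0.4]   # close to the training manifold
attack = [0.9, 0.1, 1.0]   # far from it
scores = reconstruction_scores([normal, attack], reconstruct)
flags = flag_anomalies(scores, threshold=0.05)
```

Because the generator is trained only on normal traffic, anomalous inputs reconstruct poorly and receive high scores; the reconstruction-error visualization mentioned above is one of the explainability views layered on top of exactly this quantity.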
997.
998.
Xing Jiuping. Water Science and Engineering Technology, 2014, (5): 73-76
Using a water-conservancy project as an example, this paper introduces the working principle, construction process, common problems, remedial measures, and pile-quality inspection methods of vibro-replacement stone columns. With detailed data and appropriate methods, it provides useful guidance for construction. After construction, the bearing capacity of the treated foundation improved markedly and the construction results were good, offering a reference for similar projects.
999.
This paper presents the analysis and treatment of anomalies observed in the static load test for the substation complex building of the Sinohydro Rudong offshore (intertidal) 100 MW demonstration wind farm project, and raises issues that deserve attention during construction, such as the need for real-time monitoring.
1000.
Motivated by recent applications of wireless sensor networks in monitoring infrastructure networks, we address the problem of optimal coverage of infrastructure networks using sensors whose sensing performance decays with distance. We show that this problem can be formulated as a continuous p-median problem on networks. The literature has extensively addressed the discrete p-median problem on networks and in continuum domains, and the continuous p-median problem in continuum domains; however, in-depth analysis of the continuous p-median problem on networks has been lacking. With a sensing-performance model that decays with distance, each sensor covers a region equivalent to its Voronoi partition on the network under the shortest-path distance metric. Using Voronoi partitions, we define a directional partial derivative of the coverage metric with respect to a sensor's location. We then propose a gradient descent algorithm with guaranteed convergence to a locally optimal solution. The quality of that solution depends on the choice of the initial sensor configuration. We obtain an initial configuration in two ways: by solving the discrete p-median problem on a lumped network, and by random sampling, using either uniform sampling or D2-sampling. The first approach yields the best coverage performance for large networks, but at the cost of high running time. We also observe that gradient descent initialized by D2-sampling yields a solution within at most 7% of that best solution, with a much shorter running time.
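The network Voronoi-partition step under the shortest-path metric can be sketched as below: each node is assigned to its nearest sensor via Dijkstra's algorithm, and the summed shortest-path distance serves as a discretized coverage cost. This is only the partition/evaluation step, not the paper's continuous gradient descent, and the toy graph and sensor placement are assumptions:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source on a weighted undirected graph
    given as {node: [(neighbor, weight), ...]}."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def voronoi_partition(graph, sensors):
    """Assign every node to its nearest sensor (shortest-path metric) and
    return the cells plus the total coverage cost (sum of distances)."""
    dists = {s: dijkstra(graph, s) for s in sensors}
    cells = {s: [] for s in sensors}
    cost = 0.0
    for node in graph:
        best = min(sensors, key=lambda s: dists[s].get(node, float("inf")))
        cells[best].append(node)
        cost += dists[best][node]
    return cells, cost

# Toy path graph a-b-c-d with unit-length edges; sensors placed at a and d.
g = {"a": [("b", 1.0)],
     "b": [("a", 1.0), ("c", 1.0)],
     "c": [("b", 1.0), ("d", 1.0)],
     "d": [("c", 1.0)]}
cells, cost = voronoi_partition(g, ["a", "d"])
```

In the continuous formulation, sensors may sit anywhere along an edge and this cost becomes an integral over the network; evaluating it on Voronoi cells like these is what makes the directional derivative, and hence the gradient descent, computable.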